A New Statistical Model for Evaluating Interactive Question Answering Systems Using Regression
Authors
Abstract:
The growth of computer systems and the pervasive use of information technology in everyday life have made quick access to information increasingly important. At the same time, the growing volume of information makes it difficult to manage and control, so tools are needed to exploit it effectively. A question answering (QA) system is an automated system that returns correct answers to questions posed by a human in natural language. In such systems, once an answer is returned, there is no way for the system and the user to exchange further information if the answer is not what the user expected or if more information is needed. Interactive question answering (IQA) systems were created to solve this problem. Because IQA systems handle linguistically ambiguous structures through interaction, they can be more accurate than plain QA systems: when ambiguity arises, whether in the user's question or in the answer provided by the system, the exchange can be repeated until clarity is reached. No standard methods have been developed for evaluating IQA systems; the existing evaluation methods are adapted from those used for QA and dialogue systems. Evaluating IQA systems requires qualitative evaluation in addition to quantitative evaluation, and therefore the participation of users, in order to determine how successful the interaction between the system and the user is. Evaluation plays an important role in IQA systems, yet there is virtually no general methodology for it. The main difficulty in designing an assessment method for IQA systems is that the interactive part can hardly be predicted, which is why humans have to be involved in the evaluation process. In this paper, an appropriate model is presented by introducing a set of features for evaluating IQA systems. For the evaluation, four IQA systems were considered on the basis of the conversations exchanged between users and the systems, and 540 samples were used to create the training and test sets. After preprocessing, the statistical characteristics of each conversation were extracted and a feature matrix was formed from them. Finally, linear and nonlinear regression models were used to predict the human judgments. The nonlinear power regression, with a Root Mean Square Error (RMSE) of 0.13, was the best model.
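As a rough illustration of the pipeline described above, the sketch below builds a placeholder feature matrix for 540 conversations, fits a linear regression baseline and a power-form regression (fitted in log space), and reports the RMSE of each on a held-out split. The feature names, the 80/20 split, the synthetic target values, and the exact form of the power model are assumptions for illustration only; the paper's actual features and model specification are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Placeholder data: 540 conversations, each described by a few statistical
# features of the dialogue (e.g. number of turns, mean question length,
# mean answer length). The real feature set is the one defined in the paper.
rng = np.random.default_rng(0)
X = rng.uniform(1.0, 10.0, size=(540, 3))   # feature matrix (illustrative values)
y = rng.uniform(0.1, 1.0, size=540)         # human judgment scores (placeholder)

# Hold out part of the data as a test set (the 80/20 split is an assumption).
split = int(0.8 * len(X))
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

def rmse(y_true, y_pred):
    """Root Mean Square Error: sqrt(mean((y_true - y_pred)^2))."""
    return np.sqrt(mean_squared_error(y_true, y_pred))

# Linear regression baseline: y ~ w . x + b.
linear = LinearRegression().fit(X_train, y_train)
rmse_linear = rmse(y_test, linear.predict(X_test))

# Power-form regression, fitted in log space:
#   y = a * x1^b1 * x2^b2 * ...  <=>  log y = log a + b1*log x1 + b2*log x2 + ...
power = LinearRegression().fit(np.log(X_train), np.log(y_train))
rmse_power = rmse(y_test, np.exp(power.predict(np.log(X_test))))

print(f"linear RMSE: {rmse_linear:.3f}  power RMSE: {rmse_power:.3f}")
```

The lower of the two RMSE values identifies the better-fitting model; in the paper, the nonlinear power regression won with an RMSE of 0.13.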
Similar resources
User-Centered Evaluation Of Interactive Question Answering Systems
We describe a large-scale evaluation of four interactive question answering systems with real users. The purpose of the evaluation was to develop evaluation methods and metrics for interactive QA systems. We present our evaluation method as a case study, and discuss the design and administration of the evaluation components and the effectiveness of several evaluation techniques with respect to t...
Using interview data to identify evaluation criteria for interactive, analytical question-answering systems
Evaluation for Scenario Question Answering Systems
Scenario Question Answering is a relatively new direction in Question Answering (QA) research that presents a number of challenges for evaluation. In this paper, we propose a comprehensive evaluation strategy for Scenario QA, including a methodology for building reusable test collections for Scenario QA and metrics for evaluating system performance over such test collections. Using this methodo...
Optimizing question answering systems by Accelerated Particle Swarm Optimization (APSO)
One of the most important research areas in natural language processing is Question Answering Systems (QASs). Existing search engines, with Google at the top, have many remarkable capabilities. But there is a basic limitation (search engines do not have deduction capability), a capability which a QAS is expected to have. In this perspective, a search engine may be viewed as a semi-mechanized QA...
Connecting Question Answering and Conversational Agents: Contextualizing German Questions for Interactive Question Answering Systems
Research results in the field of Question Answering (QA) have shown that the classification of natural language questions significantly contributes to the accuracy of the generated answers. In this paper we present an approach which extends the prevalent question classification techniques by additionally considering further contextual information provided by the questions. Thereby we focus on i...
Journal title
volume 16, issue 3
pages 37-48
publication date 2019-12
No Keywords